Research shows that while teens face risks on social media, they also find peer support, particularly through direct messaging. (Pexels)

AI could assist in striking the right balance between protection and privacy for teenagers on social media.

On January 9, 2024, Meta announced plans to safeguard teenage users by restricting what they can see on Instagram and Facebook, blocking access to potentially harmful content such as material related to suicide and eating disorders. The decision follows mounting pressure from federal and state authorities urging social media platforms to prioritize teenagers' safety.

At the same time, teenagers use social media to seek peer support that they cannot get elsewhere. Efforts to protect them can inadvertently make that help harder to reach.

In recent years, Congress has held numerous hearings on social media and the risks it poses to young people. The CEOs of Meta, X (formerly known as Twitter), TikTok, Snap and Discord are scheduled to testify before the Senate Judiciary Committee on January 31, 2024, about their efforts to protect minors from sexual exploitation.

Tech companies “finally must admit their failure to protect children,” according to a statement issued ahead of the hearing by the committee’s chairman and ranking member, Sens. Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), respectively.

I am a researcher who studies network security. My colleagues and I have been studying teenagers’ social media interactions and how effectively the platforms protect their users. Research shows that while teenagers face dangers on social media, they also find peer support, especially through direct messaging. We have identified a number of steps that social media platforms can take to protect users while preserving their privacy and autonomy online.

What teenagers face

The risks teenagers face on social media are well documented. They range from harassment and bullying to poor mental health and sexual abuse. Research has shown that companies such as Meta have known that their platforms exacerbate mental health problems, helping make youth mental health one of the priorities of the US Surgeon General.

Much of the research on youth online safety relies on self-reported data such as surveys. There is a need for more study of young people’s actual, private interactions and their perceptions of online risks. To address this need, my colleagues and I collected a large dataset of young people’s Instagram activity, including more than 7 million direct messages. We asked young people to review their own conversations and identify the messages that made them feel uncomfortable or unsafe.

Using this dataset, we found that direct interactions can be crucial for young people seeking support on issues ranging from everyday life to mental health concerns. Our findings suggest that young people used these channels to discuss their public interactions in greater depth. In settings built on mutual trust, teenagers felt safe asking for help.

Research indicates that the privacy of online exchanges plays an important role in young people’s online safety, yet at the same time a significant amount of harmful interaction on these platforms comes in the form of private messages. The harmful messages users reported in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech, and the sale or promotion of illegal activities.

However, pressure on platforms to protect user privacy has made it more difficult for them to use automated technology to detect and prevent online risks to teenagers. For example, Meta has implemented end-to-end encryption for all messages on its platforms, ensuring that message content is secure and accessible only to the participants in a conversation.

Meta’s measures to screen out suicide and eating disorder content also keep that content out of public posts and search, even when it was posted by a teenager’s friend. This means the teenager who shared the content is left alone, without the support of friends and peers. In addition, Meta’s content strategy does not address the unsafe interactions that occur in teenagers’ private conversations online.

Achieving balance

The challenge, then, is to protect younger users without violating their privacy. To that end, we conducted a study to find out how little data is needed to detect unsafe messages. We wanted to understand how various features, or metadata, of risky conversations, such as the length of the conversation, the average response time, and the relationships of the participants, could help machine learning programs detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.
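To illustrate the general idea, the sketch below trains a classifier on conversation metadata alone. This is a simplified illustration, not the study's actual model or data; the feature names, values and labels are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-conversation metadata, one row per conversation:
# [num_messages, avg_response_time_sec, share_of_messages_from_one_sender, participants_are_connected]
X = np.array([
    [3,    8.0, 1.00, 0],   # short, one-sided, between strangers
    [5,   15.0, 0.90, 0],
    [4,   10.0, 0.95, 0],
    [150, 240.0, 0.52, 1],  # long, balanced, between mutual connections
    [90,  300.0, 0.48, 1],
    [200, 180.0, 0.55, 1],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = flagged as unsafe by the teen, 0 = not flagged

# Train and evaluate on metadata alone; the text of the messages is never read.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```

Because the model sees only counts, timing and relationship signals, no message content needs to be inspected to make a prediction.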

We found that our machine learning program was able to identify dangerous conversations 87% of the time using only conversation metadata. However, analyzing the text, images and videos of the conversations is the most effective way to identify the type and severity of the risk.

These results highlight the importance of metadata in distinguishing unsafe conversations, and they could serve as a guideline for platforms designing AI-based risk detection. Platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users' privacy. For example, persistent harassment that a young person wants to avoid would generate metadata (repeated, short, one-sided communications between users) that an AI system could use to block the harasser.
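As a rough illustration of such a metadata-only rule, a detector might flag accounts that repeatedly start short, unanswered conversations. The field names and thresholds below are assumptions chosen for the example, not values from our research.

```python
from dataclasses import dataclass

@dataclass
class ConversationMetadata:
    messages_from_a: int        # messages sent by the initiating account
    messages_from_b: int        # replies from the recipient
    avg_message_length: float   # average characters per message
    repeated_threads: int       # separate conversations started by the same account

def looks_like_persistent_harassment(meta: ConversationMetadata) -> bool:
    """Flag repeated, short, one-sided contact without reading any message content."""
    total = meta.messages_from_a + meta.messages_from_b
    if total == 0:
        return False
    one_sided = meta.messages_from_a / total > 0.9
    short = meta.avg_message_length < 40
    repeated = meta.repeated_threads >= 3
    return one_sided and short and repeated

# Example: an account that keeps opening short conversations that go unanswered
print(looks_like_persistent_harassment(
    ConversationMetadata(messages_from_a=25, messages_from_b=1,
                         avg_message_length=18, repeated_threads=5)
))  # -> True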

Ideally, youth and their caregivers would be given the option to turn on encryption, risk detection or both, so that they can decide for themselves the trade-offs between privacy and safety.
